
    Off-shell hydrodynamics from holography

    We outline a program for obtaining an action principle for dissipative fluid dynamics by considering the holographic Wilsonian renormalization group applied to systems with a gravity dual. As a first step, in this paper we restrict to systems with a non-dissipative horizon. By integrating out gapped degrees of freedom in the bulk gravitational system between an asymptotic boundary and a horizon, we are led to a formulation of hydrodynamics where the dynamical variables are not the standard velocity and temperature fields, but the relative embedding of the boundary and horizon hypersurfaces. At zeroth order, this action reduces to that proposed by Dubovsky et al. as an off-shell formulation of ideal fluid dynamics.
    Comment: 34 pages, 2 figures; v2: references added, clarifications added in Sec. I
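
    For orientation, the zeroth-order action referenced above is the ideal-fluid effective action of Dubovsky et al., in which the fluid is described by three comoving scalar fields. A compact statement of that action, as it usually appears in the effective-field-theory literature (conventions here are ours and may differ from the paper's):

```latex
% Ideal-fluid action of Dubovsky et al. (sketch; conventions may differ).
% The dynamical fields are three comoving coordinates \phi^I(x), I = 1,2,3.
S[\phi] = \int \mathrm{d}^4x \, F(b),
\qquad
b \equiv \sqrt{\det B^{IJ}},
\qquad
B^{IJ} \equiv \partial_\mu \phi^I \, \partial^\mu \phi^J .
```

    Here the function F fixes the equation of state and b is identified with the entropy density; the associated entropy current is conserved identically, without using the equations of motion, which is what makes the formulation off-shell.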

    Nationality Classification Using Name Embeddings

    Nationality identification unlocks important demographic information, with many applications in biomedical and sociological research. Existing name-based nationality classifiers use name substrings as features and are trained on small, unrepresentative sets of labeled names, typically extracted from Wikipedia. As a result, these methods achieve limited performance and cannot support fine-grained classification. We exploit the phenomenon of homophily in communication patterns to learn name embeddings, a new representation that encodes gender, ethnicity, and nationality and is readily applicable to building classifiers and other systems. Through our analysis of 57M contact lists from a major Internet company, we are able to design a fine-grained nationality classifier covering 39 groups representing over 90% of the world population. In an evaluation against other published systems over 13 common classes, our F1 score (0.795) is substantially better than that of our closest competitor, Ethnea (0.580). To the best of our knowledge, this is the most accurate, fine-grained nationality classifier available. As a social media application, we apply our classifiers to the followers of major Twitter celebrities across six different domains. We demonstrate stark differences in the ethnicities of the followers of Trump and Obama, and in the sports and entertainment favored by different groups. Finally, we identify an anomalous political figure whose presumably inflated following appears largely incapable of reading the language he posts in.
    Comment: 10 pages, 9 figures, 4 tables; accepted by CIKM 2017. Demo and free API: www.name-prism.co
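
    The two-stage pipeline described above (unsupervised embeddings from contact-list co-occurrence, then a supervised classifier on top) can be illustrated with a minimal sketch. This is our own illustration of the idea, not the paper's system: the library choices (gensim, scikit-learn), the toy data, and all names below are stand-ins.

```python
# Sketch: (1) learn name embeddings from co-occurrence in contact lists
# (homophily: names in the same list get nearby vectors), (2) train a
# supervised nationality classifier on top of the embeddings.
# `contact_lists` and `labeled_names` are hypothetical toy placeholders.
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Each contact list is treated as a "sentence" of name tokens.
contact_lists = [
    ["maria garcia", "jose hernandez", "ana lopez"],
    ["hiroshi tanaka", "yuki sato", "kenji yamamoto"],
]
model = Word2Vec(sentences=contact_lists, vector_size=100, window=50,
                 min_count=1, sg=1)  # skip-gram embeddings

# Supervised stage: labeled (name, nationality) pairs.
labeled_names = [("maria garcia", "Hispanic"), ("hiroshi tanaka", "Japanese")]
X = [model.wv[name] for name, _ in labeled_names]
y = [label for _, label in labeled_names]
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Classify an unlabeled name via its embedding.
print(clf.predict([model.wv["ana lopez"]]))
```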

    Joint multiple dictionary learning for tensor sparse coding

    Traditional dictionary learning algorithms find a sparse representation of high-dimensional data by transforming each sample into a one-dimensional (1D) vector. This 1D model loses the inherent spatial structure of the data. An alternative is to apply tensor decomposition to perform dictionary learning on the data in their original structural form (a tensor), learning one dictionary along each mode together with a sparse representation with respect to the Kronecker product of these dictionaries. To learn the tensor dictionaries along each mode, all existing methods update each dictionary iteratively in an alternating manner. Although atoms from the different mode dictionaries jointly contribute to the sparsity of the tensor, existing works treat each mode dictionary independently and thus ignore correlations between atoms of different mode dictionaries. In this paper, we propose a joint multiple dictionary learning method for tensor sparse coding, which exploits atom correlations in the sparse representation and updates multiple atoms from each mode dictionary simultaneously. In this algorithm, the Frequent-Pattern Tree (FP-tree) mining algorithm is employed to find frequent atom patterns in the sparse representation. Inspired by the idea of K-SVD, we develop a new dictionary update method that jointly updates the elements in each pattern. Experimental results demonstrate that our method outperforms other tensor-based dictionary learning algorithms.
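
    As background for the model above, here is a minimal sketch of tensor sparse coding with one dictionary per mode: a tensor is synthesized from a sparse core via mode-n products, which is equivalent to a sparse representation under the Kronecker product of the mode dictionaries. The dimensions and random dictionaries below are arbitrary stand-ins, and this shows only the representation model, not the paper's FP-tree-based joint update.

```python
# Sketch of the Kronecker sparse-coding model: a 3-way tensor X is built
# from a sparse core G and one dictionary per mode,
#   X = G x1 D1 x2 D2 x3 D3,  equivalently  vec(X) = (D3 (x) D2 (x) D1) vec(G)
# with column-major vectorization.
import numpy as np

def mode_n_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    T = np.moveaxis(T, mode, 0)
    shape = T.shape
    out = M @ T.reshape(shape[0], -1)
    return np.moveaxis(out.reshape((M.shape[0],) + shape[1:]), 0, mode)

rng = np.random.default_rng(0)
D = [rng.standard_normal((d, k)) for d, k in [(8, 12), (9, 12), (10, 12)]]

# Sparse core: only a few active atom combinations.
G = np.zeros((12, 12, 12))
idx = rng.integers(0, 12, size=(5, 3))
G[idx[:, 0], idx[:, 1], idx[:, 2]] = rng.standard_normal(5)

X = G
for mode, Dn in enumerate(D):
    X = mode_n_product(X, Dn, mode)

# Verify equivalence with the Kronecker-product view (note mode ordering).
kron = np.kron(np.kron(D[2], D[1]), D[0])
vecX = kron @ G.reshape(-1, order="F")
print(np.allclose(vecX, X.reshape(-1, order="F")))  # True
```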

    Tensor regression based on linked multiway parameter analysis

    Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems whose covariates have more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality and complex structure. By exploiting the special structure of tensor covariates, the tensor regression model offers a promising way to reduce the model's dimensionality to a manageable level, leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, using a tensor decomposition that allows the simultaneous projection of an input tensor onto more than one direction along each mode. In practice, however, multi-dimensional data are collected under the same or very similar conditions, so the data share some common latent components while each regression task retains its own independent parameters. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies the common components of the parameters across all regression tasks and the independent factors contributing to each particular task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modeling further reduce the total number of parameters, with lower memory cost than their tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
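
    The dimensionality-reduction argument above can be illustrated with the simplest structured case: a rank-1 bilinear model for matrix covariates, fitted by alternating least squares. This is our own toy sketch of the general idea (p + q parameters instead of p*q), not the paper's linked multiway model with shared Tucker components and sparsity regularisation.

```python
# Sketch of structured tensor regression: for matrix covariates X_i, fit
# y_i ~ u^T X_i v with a rank-1 parameter matrix W = u v^T via alternating
# least squares.  Dimensions and noise level are arbitrary stand-ins.
import numpy as np

rng = np.random.default_rng(1)
p, q, n = 10, 12, 500
u_true, v_true = rng.standard_normal(p), rng.standard_normal(q)
X = rng.standard_normal((n, p, q))
y = np.einsum("i,nij,j->n", u_true, X, v_true) + 0.01 * rng.standard_normal(n)

u, v = rng.standard_normal(p), rng.standard_normal(q)
for _ in range(50):
    # Fix v, solve the least-squares problem y ~ (X v) u for u.
    A = np.einsum("nij,j->ni", X, v)
    u, *_ = np.linalg.lstsq(A, y, rcond=None)
    # Fix u, solve y ~ (X^T u) v for v.
    B = np.einsum("nij,i->nj", X, u)
    v, *_ = np.linalg.lstsq(B, y, rcond=None)

W_true, W_hat = np.outer(u_true, v_true), np.outer(u, v)
print(np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true))  # small
```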

    Is Stochastic Gradient Descent Near Optimal?

    The success of neural networks over the past decade has established them as effective models for many relevant data generating processes. Statistical theory on neural networks indicates graceful scaling of sample complexity. For example, Jeon & Van Roy (arXiv:2203.00246) demonstrate that, when data is generated by a ReLU teacher network with $W$ parameters, an optimal learner needs only $\tilde{O}(W/\epsilon)$ samples to attain expected error $\epsilon$. However, existing computational theory suggests that, even for single-hidden-layer teacher networks, attaining small error for all such teacher networks at this sample complexity requires intractable computation. In this work, we fit single-hidden-layer neural networks to data generated by single-hidden-layer ReLU teacher networks with parameters drawn from a natural distribution. We demonstrate that stochastic gradient descent (SGD) with automated width selection attains small expected error with a number of samples and a total number of queries both nearly linear in the input dimension and width. This suggests that SGD nearly achieves the information-theoretic sample complexity bounds of Jeon & Van Roy (arXiv:2203.00246) in a computationally efficient manner. An important difference between our positive empirical results and the negative theoretical results is that the latter address the worst-case error of deterministic algorithms, while our analysis centers on the expected error of a stochastic algorithm.
    Comment: arXiv admin note: substantial text overlap with arXiv:2203.00246
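
    The experimental setup lends itself to a compact sketch: generate data from a random single-hidden-layer ReLU teacher and fit a wider student with plain SGD on squared error. Everything below (dimensions, widths, learning rate, and a fixed rather than automatically selected width) is our own stand-in, not the paper's protocol.

```python
# Teacher-student sketch: fit a single-hidden-layer ReLU network with SGD
# to data from a random single-hidden-layer ReLU teacher.
import numpy as np

rng = np.random.default_rng(0)
d, teacher_width, student_width, n = 8, 4, 16, 4096

# Teacher with parameters drawn from a simple Gaussian distribution.
A_t = rng.standard_normal((teacher_width, d)) / np.sqrt(d)
b_t = rng.standard_normal(teacher_width)
X = rng.standard_normal((n, d))
y = np.maximum(X @ A_t.T, 0.0) @ b_t

# Student: y_hat = relu(A x) . b, trained with plain SGD on squared error.
A = rng.standard_normal((student_width, d)) / np.sqrt(d)
b = rng.standard_normal(student_width) / np.sqrt(student_width)
lr = 0.01
for epoch in range(100):
    for i in rng.permutation(n):
        h = np.maximum(A @ X[i], 0.0)                  # hidden activations
        err = h @ b - y[i]                             # prediction residual
        grad_b = err * h                               # dL/db
        grad_A = err * np.outer(b * (h > 0), X[i])     # dL/dA
        b -= lr * grad_b
        A -= lr * grad_A

print("train MSE:", np.mean((np.maximum(X @ A.T, 0.0) @ b - y) ** 2))
```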

    Quantum teleportation implies symmetry-protected topological order

    We constrain a broad class of teleportation protocols using insights from locality. In the "standard" teleportation protocols we consider, all outcome-dependent unitaries are Pauli operators conditioned on linear functions of the measurement outcomes. We find that all such protocols involve preparing a "resource state" exhibiting symmetry-protected topological (SPT) order with Abelian protecting symmetry $\mathcal{G}_k = (\mathbb{Z}_2 \times \mathbb{Z}_2)^k$. The $k$ logical states are teleported between the edges of the chain by measuring the corresponding $2k$ string order parameters in the bulk and applying outcome-dependent Paulis. Hence, this single class of nontrivial SPT states is both necessary and sufficient for the standard teleportation of $k$ qubits. We illustrate this result with several examples, including a nonstabilizer hypergraph state.
    Comment: 33 pages, 8 figures
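
    To make the notion of a "standard" protocol concrete, here is a minimal sketch of its simplest instance, the textbook one-qubit Bell-pair protocol, where the correction $Z^{m_0} X^{m_1}$ is a Pauli conditioned linearly on the two measurement outcomes. This toy simulation is ours and does not reproduce the paper's SPT and string-order analysis.

```python
# One-qubit "standard" teleportation: Pauli corrections conditioned
# linearly on measurement outcomes.  Toy statevector simulation.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

psi_in = np.array([0.6, 0.8j])                       # state to teleport
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi_in, bell)                        # qubits: 0 | 1 2

# Bell measurement on qubits 0,1: CNOT(0 -> 1), then H on qubit 0.
state = np.kron(np.kron(H, I2), I2) @ (np.kron(CNOT, I2) @ state)

# Sample computational-basis outcomes (m0, m1) for qubits 0 and 1.
rng = np.random.default_rng(7)
amps = state.reshape(2, 2, 2)
p = (np.abs(amps) ** 2).sum(axis=2).reshape(-1)
m0, m1 = np.unravel_index(rng.choice(4, p=p), (2, 2))
post = amps[m0, m1]
post = post / np.linalg.norm(post)                   # Bob's qubit

# The correction is a linear function of the outcomes: apply Z^m0 X^m1.
out = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ post
print(abs(np.vdot(out, psi_in)))                     # 1.0: state teleported
```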

    Long-range-enhanced surface codes

    The surface code is a quantum error-correcting code for one logical qubit, protected by spatially localized parity checks in two dimensions. Due to fundamental constraints from spatial locality, storing more logical qubits requires either sacrificing the robustness of the surface code against errors or increasing the number of physical qubits. We bound the minimal number of spatially non-local parity checks necessary to add logical qubits to a surface code while maintaining, or improving, robustness to errors. We asymptotically saturate this bound using a family of hypergraph product codes, interpolating between the surface code and constant-rate low-density parity-check codes. Fault-tolerant protocols for logical operations generalize naturally to these longer-range codes, based on those from ordinary surface codes. We provide near-term practical implementations of this code for hardware based on trapped ions or neutral atoms in mobile optical tweezers. Long-range-enhanced surface codes outperform conventional surface codes using hundreds of physical qubits, and represent a practical strategy to enhance the robustness of logical qubits to errors in near-term devices.
    Comment: 16 pages, 12 figures; v2 changes: fixed typos and added citations
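
    The hypergraph product construction these codes build on is easy to state concretely. The sketch below (ours; parameters arbitrary) builds the CSS check matrices from two classical parity-check matrices and verifies the commutation condition; taking both inputs to be repetition-code checks recovers a 13-qubit surface-code-like code.

```python
# Hypergraph product of classical codes H1 (r1 x n1) and H2 (r2 x n2):
#   HX = [ H1 (x) I_n2 | I_r1 (x) H2^T ],  HZ = [ I_n1 (x) H2 | H1^T (x) I_r2 ]
# These satisfy HX HZ^T = 0 (mod 2) by construction, so they define a
# valid CSS quantum code.
import numpy as np

def repetition_check(n):
    """Parity checks of the length-n repetition code."""
    H = np.zeros((n - 1, n), dtype=int)
    for i in range(n - 1):
        H[i, i] = H[i, i + 1] = 1
    return H

def hypergraph_product(H1, H2):
    r1, n1 = H1.shape
    r2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), H2.T)])
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(r2, dtype=int))])
    return HX % 2, HZ % 2

H1 = H2 = repetition_check(3)
HX, HZ = hypergraph_product(H1, H2)
print(HX.shape, HZ.shape)               # 6 checks each on 13 physical qubits
print(np.all((HX @ HZ.T) % 2 == 0))     # CSS commutation condition: True
```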